3 Regional-level data

At the regional level, the facts we hear about every day are evident, namely the heterogeneous spread of the virus across the national territory, as shown in Figure @ref{fig:regioni-casi}.

Figure 3.1: Representation of the number of cases per region

The rest to follow…


3.1 Uniform distribution on the disc and on the circumference

In this section we study the uniform distribution on the disc and on the circumference; in these cases many properties have a simple intuitive meaning. In the following section we will generalize these distributions to the \(d\)–dimensional case.

### Uniform distribution on the disc

In the following we will denote by \(B_{\boldsymbol x}(r)\) the closed ball centered at \(\boldsymbol x\) with radius \(r\), that is \(B_{\boldsymbol x}(r):= \left\{ \boldsymbol y\in \mathbb R^2: \left\| \boldsymbol x - \boldsymbol y \right\| \le r \right\}\). The definition of the uniform distribution on the disc is straightforward:

Definition 3.1 A random array \(\boldsymbol X=\left( X_1,X_2 \right)\) has uniform distribution on the disc \(B_0(1)\subset \mathbb R^2\) if it has absolutely continuous distribution with density \[ f_{\boldsymbol X}(x_1,x_2) = \frac{1}{\pi}\, \mathbb{1}_{B_{0}(1)}(x_1,x_2). \] In this case we write \(\boldsymbol X\sim \mathscr U\left( B_{0}(1) \right)\).

It is obvious that, if \(\boldsymbol X\sim \mathscr U\left( B_{0}(1) \right)\), then the two marginal distributions are equal and, for all \(x\in [-1,1]\), we have \[ \begin{split} \mathbb P\left( X_1 \le x \right) &= \frac{1}{\pi}\, \int_{-1}^x d x_1 \int_{-\sqrt{1-x_1^2}}^{\sqrt{1-x_1^2}} d x_2 = \frac{2}{\pi}\,\int_{-1}^x\sqrt{1-x_1^2}\, d x_1\\ &= \frac{1}{\pi}\, \left[ x_1 \sqrt{1- x_1^2} + \arcsin x_1 \right]_{-1}^x = \frac{1}{2} + \frac{1}{\pi}\, \left[ x \sqrt{1- x^2} + \arcsin x \right] \end{split} \] and the marginal density is \[ f_{X_1}(x_1) = \frac{2}{\pi} \, \sqrt{1-x_1^2}\; \mathbb{1}_{[-1,1]}(x_1). \] The distribution function and the density function are represented in the accompanying figure.
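The computation above can be checked numerically. The sketch below (a minimal example, assuming NumPy is available; sample size and grid are arbitrary) draws uniform points on the disc by rejection from the square and compares the empirical distribution function of \(X_1\) with the closed form just derived.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rejection sampling: draw from the square [-1, 1]^2, keep points inside the disc.
pts = rng.uniform(-1.0, 1.0, size=(200_000, 2))
pts = pts[(pts ** 2).sum(axis=1) <= 1.0]

x = np.linspace(-0.99, 0.99, 21)
emp_cdf = (pts[:, 0][:, None] <= x).mean(axis=0)
# Closed form: P(X_1 <= x) = 1/2 + (x sqrt(1 - x^2) + arcsin x) / pi
exact_cdf = 0.5 + (x * np.sqrt(1.0 - x ** 2) + np.arcsin(x)) / np.pi
max_err = np.abs(emp_cdf - exact_cdf).max()
```

The acceptance rate of the rejection step is the area ratio \(\pi/4\), which is also a quick sanity check on the sampler.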

The above definition generalizes immediately to the \(d\)–dimensional case, once one recalls that the volume of the \(d\)–dimensional ball of radius 1 is \(\pi^{d/2}/ \Gamma(d/2+1)\):
Definition 3.2 A random array \(\boldsymbol X=\left( X_1,\dots, X_d \right)\) has uniform distribution on the ball \(B_0(1)\subset \mathbb R^d\) if it has absolutely continuous distribution with density \[ f_{\boldsymbol X}(x_1,\dots, x_d) = \frac{\Gamma\left( \frac{d}{2}+1 \right)}{\pi^{d/2}}\, \mathbb{1}_{B_{0}(1)}(x_1,\dots, x_d). \] In this case we write \(\boldsymbol X\sim \mathscr U_d\left( B_{0}(1) \right)\).
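The normalizing constant can be sanity-checked against the familiar low-dimensional volumes (a minimal sketch using only the standard library; the function name is ours):

```python
import math

def unit_ball_volume(d: int) -> float:
    """Volume of the unit ball in R^d: pi^(d/2) / Gamma(d/2 + 1)."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

vol2 = unit_ball_volume(2)  # area of the unit disc: pi
vol3 = unit_ball_volume(3)  # volume of the unit ball in R^3: 4*pi/3
```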

In the following, given a random vector \(\boldsymbol X = \left( X_1,\dots,X_d \right)\) and a vector \(\boldsymbol i=(i_1,\dots,i_k), \, k < d\) of distinct indices, we denote by \(\boldsymbol X_{\boldsymbol i}\) the vector \(\left( X_{i_1},\dots,X_{i_k} \right)\) and by \(\boldsymbol X_{- \boldsymbol i}\) the vector containing the other elements of \(\boldsymbol X\).

Proposition 3.1 If \(\boldsymbol X \sim \mathscr U\left( B_{0}(1) \right)\), then

 * for $i=1,2$ the random variables $X_i$ have continuous distribution with density 
    \[
    f_{X_i}(x) = \frac{2}{\pi}\, \sqrt{1 - x^2}\, \Chi_{[-1,1]}(x), \quad x\in \R;
    \]
* $X_i | X_{-i}= x_{-i} \sim \U\left( \left[ -\sqrt{1-x_{-i}^2},\sqrt{1-x_{-i}^2} \right] \right)$, that is, the conditional distribution of $X_{i}$ given $X_{-i}=x_{-i}$ is a $\U\left( \left[ -\sqrt{1-x_{-i}^2},\sqrt{1-x_{-i}^2} \right] \right)$;
* $\boldsymbol X$ has the following characteristic function: for all $\boldsymbol z\in \R^2$, $\hat{\mu}_{\boldsymbol X}(\boldsymbol z) = \E\left[ e^{i \left< \boldsymbol z , \boldsymbol X \right> } \right] = \frac{2 J_1(z)}{z}$, where $z:=\left\| \boldsymbol z \right\|$ and $J_n$ is the Bessel function of the first kind of order $n$.
We have already proved the first claim, so it remains to prove the other two. For the second point, consider the distribution of $X_1$ conditional on $X_2$: clearly $X_1$ can take values only in the interval $[-\sqrt{1-x_2^2},\sqrt{1-x_2^2}]$, and the conditional density is

\[ f_{X_1|X_2=x_2}(x_1) = \frac{1/\pi}{\frac{2}{\pi}\, \sqrt{1-x_2^2}}\, \mathbb{1}_{[-\sqrt{1-x_2^2},\sqrt{1-x_2^2}]}(x_1) = \frac{1}{2 \sqrt{1-x_2^2}}\, \mathbb{1}_{[-\sqrt{1-x_2^2},\sqrt{1-x_2^2}]}(x_1), \] which is the density of a uniform random variable on \([-\sqrt{1-x_2^2},\sqrt{1-x_2^2}]\). Intuitively, knowing the value taken by \(X_2\), the only added information about \(X_1\) is that it takes values in \([-\sqrt{1-x_2^2},\sqrt{1-x_2^2}]\); it tells us nothing about the shape of the distribution.
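The conditional uniformity can be illustrated by simulation (NumPy assumed; the conditioning value \(x_2 = 0.6\) and the slab width are arbitrary choices of ours): points of the disc whose \(X_2\) lies near \(0.6\) should have \(X_1\) approximately uniform on \([-0.8, 0.8]\), hence with variance close to \(0.8^2/3\).

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(2_000_000, 2))
pts = pts[(pts ** 2).sum(axis=1) <= 1.0]        # uniform points on the disc

x2 = 0.6
half_width = np.sqrt(1.0 - x2 ** 2)             # conditional support is [-0.8, 0.8]
slab = pts[np.abs(pts[:, 1] - x2) < 0.005, 0]   # X_1 values with X_2 close to 0.6

cond_var = slab.var()                            # compare with half_width**2 / 3
```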

Finally, for the third point, we compute the characteristic function of the uniform vector. To this end, recall that the distribution of the vector is rotation invariant, just as in the Gaussian case; hence the characteristic function \(\hat{\mu}_{\boldsymbol X}(\boldsymbol z)\) depends only on \(\left\| \boldsymbol z \right\|\), and we can characterize the transform by knowing it only on vectors of the form \(\boldsymbol z= (0,z)\). Thus, given a vector \(\boldsymbol z\) and denoting by \(z\) its norm, we have \[\begin{equation} \hat{\mu}_{\boldsymbol X} (z) = \mathbb E\,\left[ \exp\left\{ i z X_2 \right\} \right] = \frac{1}{\pi}\,\int_{B_0(1)} e^{i z x_2} \, d \boldsymbol x = \frac{1}{\pi}\,\int_0^1 \rho \,d \rho \int_0^{2\pi} e^{i \rho z \sin(\theta)}\, d \theta = 2 \,\int_0^1 \rho\, J_0(\rho z)\, d \rho \label{eq:fc_uni} \end{equation}\] where \(J_n\) is the Bessel function of the first kind, which has the following integral representation \[ J_n(v) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \exp\left\{ - i (n \theta - v \sin (\theta) ) \right\} \, d \theta. \]

A well-known relation links the derivatives of the Bessel functions to the functions of different order: \[ \frac{\mathrm{d}}{\mathrm{d} v }\left( v^n J_n(v) \right) = v^{n} J_{n-1}(v); \] exploiting this expression we can compute the integral in \eqref{eq:fc_uni}: indeed \[ \frac{\mathrm{d}}{\mathrm{d} v }\left( v J_1(v) \right) = v J_{0}(v), \] so, with the substitution \(u = \rho z\), \[ 2 \int_0^1 \rho\, J_0(\rho z)\, d \rho = \frac{2}{z^2} \int_0^z u\, J_0(u)\, d u = \frac{2}{z^2}\, \Big[ u\, J_1(u) \Big]_0^z, \] from which it follows that \[\begin{equation} \hat{\mu}_{\boldsymbol X} (z) = 2\, \frac{z J_1(z)}{z^2} = \frac{2 J_1(z)}{z}. \label{eq:fc_bessel} \end{equation}\]
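The closed form \(2 J_1(z)/z\) can be probed numerically (NumPy assumed; the test point \(z=2.5\) is arbitrary): we estimate \(\mathbb E[e^{izX_2}]\) by Monte Carlo on the disc and evaluate \(J_1\) through the integral representation recalled above, via a simple midpoint rule.

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(-1.0, 1.0, size=(400_000, 2))
pts = pts[(pts ** 2).sum(axis=1) <= 1.0]        # uniform points on the disc

def bessel_j1(v: float, n: int = 20_000) -> float:
    # J_1 via its integral representation, midpoint rule on (0, pi):
    # J_1(v) = (1/pi) * integral_0^pi cos(theta - v sin(theta)) d(theta)
    theta = (np.arange(n) + 0.5) * np.pi / n
    return np.cos(theta - v * np.sin(theta)).mean()

z = 2.5
empirical = np.cos(z * pts[:, 1]).mean()        # imaginary part vanishes by symmetry
exact = 2.0 * bessel_j1(z) / z
err = abs(empirical - exact)
```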





\begin{esercizio}
Let $\boldsymbol U\sim \U\left( \ball 0 1 \right)$.

\begin{listanumerata}
\item Let $R\sim Unif[0,1]$ and $\Theta\sim Unif\left[ 0,2 \pi \right]$ be independent and set $\boldsymbol V:= \left[ R \cos(\Theta), R \sin\left( \Theta \right) \right]$. Prove that $\boldsymbol V \stackrel{d}{\neq} \boldsymbol U$, while $\left[ \sqrt{R} \cos(\Theta), \sqrt{R} \sin\left( \Theta \right) \right]\stackrel{d}{=} \boldsymbol U$, i.e. it is uniformly distributed on the disc.
\item Prove that $U_i^2 \sim Beta\left( \frac{1}{2}, \frac{3}{2} \right)$.
\item Exploit point 1. of \refprop{prop:u2_properties} to prove point 2.
\end{listanumerata}
\end{esercizio}

### Uniform distribution on the circumference and on the sphere

Now we introduce the definition of the uniform distribution on $\partial \ball 0 1$, that is on the boundary of $\ball 0 1$. We remark that we cannot follow the same approach as for the disc: indeed $\partial \ball 0 1$ has dimension one, so the distribution cannot have a density with respect to the two-dimensional Lebesgue measure. The natural requirement is that all arcs with the same length have the same probability, hence the following definition.


Definition Given a random array $\boldsymbol U$, we say that $\boldsymbol U$ has a uniform distribution on the \textit{circumference} if 
    \[
\boldsymbol U := 
\begin{bmatrix}
    U_1\\
    U_2
\end{bmatrix}
\stackrel{d}{=}
\begin{bmatrix}
    \cos (\Theta)\\
    \sin (\Theta)
\end{bmatrix}
\]
where $\Theta \sim Unif(0,2\pi]$; in this case we write $\boldsymbol U \sim \U\left( \partial \ball 0 1 \right)$.
In this case it is obvious that $\boldsymbol U$ belongs to different arcs with the same probability if and only if these have the same length. From the definition it is also clear that the distribution of $\boldsymbol U$ cannot be absolutely continuous: once we know one coordinate, the other is determined up to a sign; equivalently, the support of the distribution has Lebesgue measure zero.

Now we introduce the analogue of Proposition \ref{prop:u2_properties}.
\begin{prop}\label{prop:circ_properties}
    If $\boldsymbol U \sim \U\left( \partial \ball 0 1 \right)$, then

    \begin{listanumerata}
    \item for $i=1,2$ the random variables $U_i$ have continuous distribution with density 
        \[
        f_{U_i}(x) = \frac{1}{\pi \sqrt{1- x^2}}\, \Chi_{[-1,1]}(x);
        \]
    \item $U_i | U_{-i}= u_{-i} \sim \U\left( \left\{ -\sqrt{1-u_{-i}^2},\sqrt{1-u_{-i}^2} \right\} \right)$, i.e. the conditional distribution is uniform on the two points $\pm \sqrt{1-u_{-i}^2}$;
    \item $\boldsymbol U$ has the following characteristic function: for all $\boldsymbol z\in \R^2$, $\varphi_{\boldsymbol U}(\boldsymbol z) =  J_0(z)$, where $z:=\left\| \boldsymbol z \right\|$.
    \end{listanumerata}
\end{prop}

Proof. It is easy to calculate the marginal distributions of the random vector: by definition $U_1 = \cos\left( \Theta \right)$, hence $U_1\in [-1,1]$ and for all $x\in \left[ -1,1 \right]$ we have
\[
\begin{split}
    \P\left( U_1 \le x \right) &= 2 \P\left( \Theta \le \pi , \cos\left( \Theta \right) \le x \right)
            = 2 \P\left( \arccos\left( x \right) \le \Theta \le \pi \right)
            = 1 - \frac{\arccos(x)}{\pi}.
\end{split}
\]
Hence the marginal distribution, despite the singularity of the joint distribution, is absolutely continuous and it has the following density
\[
f_{U_1}(x) = \frac{1}{\pi \sqrt{1- x^2}}\, \Chi_{[-1,1]}(x).
\]

We now compute the characteristic function; to this end we observe once again that the distribution of the vector is rotation invariant, so the characteristic function depends only on the norm of its argument. Fixing $\boldsymbol z\in \R^2$ and denoting by $z$ its norm, we have
\begin{equation}
\begin{split}
\varphi_{\boldsymbol U}(\boldsymbol z) &= \E\left[ \exp\left\{ i z_1 U_1 + i z_2 U_2\right\} \right]
    = \E\left[ \exp\left\{ i z U_1 \right\} \right]
    = \int_{-1}^1 \frac{e^{i z y}}{\pi \sqrt{1 - y^2}} \dd y\\
    &= \frac{1}{\Gamma\left( 1/2 \right)^2} \, \int_{-1}^1 e^{i z y} (1-y^2)^{-1/2} \dd y
    =  J_0(z) 
\end{split}
    \label{eq:fc_uni_circ}
\end{equation}
where the last equality follows from 8.411.10 of \cite{gradshteyn:2007} and $J_0$ is the Bessel function of the first kind of order zero.
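The marginal law just computed lends itself to a quick simulation sketch (NumPy assumed; sizes arbitrary): sample $\Theta$ uniformly and compare the empirical distribution function of $\cos(\Theta)$ with $1 - \arccos(x)/\pi$.

```python
import numpy as np

rng = np.random.default_rng(4)
theta = rng.uniform(0.0, 2.0 * np.pi, size=300_000)
u1 = np.cos(theta)                          # first coordinate of a point on the circle

x = np.linspace(-0.95, 0.95, 20)
emp = (u1[:, None] <= x).mean(axis=0)
exact = 1.0 - np.arccos(x) / np.pi          # P(U_1 <= x)
max_err = np.abs(emp - exact).max()
```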


\begin{esercizio}
    Prove that $f_{U_2}=f_{U_1}$. 
\end{esercizio}

\begin{remark}
    We underline that the marginal distributions are continuous, while the conditional ones are uniform on two points, hence discrete. Furthermore, we notice that, as one should expect, the density kernels of the marginals are the reciprocals of those of the uniform distribution on the disc.
\end{remark}

The above definition of the uniform distribution is not well suited to an extension to the $d$-dimensional case, at least from a technical point of view: to this aim one would have to parametrize the sphere in polar coordinates in $\R^d$ and assign suitable distributions to the angles of the parametrization.

Instead, the easiest way to generalize the definition is through the following characterization of the two-dimensional uniform distribution:

\begin{theorem}\label{teo:gauss_unif}
A random vector $\boldsymbol U$ has distribution $\U\left( \partial \ball 0 1 \right)$ if and only if $\boldsymbol U \stackrel{d}{=} \left( \frac{X_1}{\left\| \boldsymbol X \right\| } , \frac{X_2}{\left\| \boldsymbol X \right\| } \right)$, where $X_1,X_2\stackrel{iid}{\sim} Gauss(0,1)$.
\end{theorem}

Proof. First of all we notice that the support of the distribution of the random vector is the \(\ell^{2}\) sphere; hence, to prove the assertion, it is sufficient to show that the density of the first component is equal to that of \(U_1\).

We remark that, by construction, the distribution is symmetric about zero, so we may characterize it on the positive half-line: given $u > 0$ we have
\[
\begin{split}
    \P\left( U_1 \le u \right) &= \frac{1}{2} + \P\left( 0 \le U_1 \le u \right)
    = \frac{1}{2} + \P\left( 0 \le U_1^2 \le u^2  \right)
\end{split}
\]
so we can find it by calculating the distribution of 
\[
U_1^2 = \frac{X_1^2}{X_1^2 + X_2^2} 
 = \frac{Y_1}{Y_1 + Y_2}    
\]
where $Y_1, Y_2$ are iid $\chi^2_{(1)}$.

To calculate this density we study the following transformation: $\boldsymbol v = f\left( y_1, y_2 \right) := \left( y_1/(y_1+ y_2), y_2 \right)$. This application is a bijection from $\R_+^2$ to $[0,1]\times \R_+$, its inverse is $f^{-1}(\boldsymbol v) = \left( v_1 v_2 / (1 - v_1), v_2 \right)$, and the determinant of the Jacobian of the inverse is $\det\left( J f^{-1} \right) = v_2 / (1 - v_1)^2$. Then the joint density of the transformed vector is 
\[
\begin{split}
    g(\boldsymbol v) &= h\left( \frac{v_1 v_2}{1 - v_1} \right) h\left( v_2 \right) \frac{v_2}{\left( 1 - v_1 \right)^2}
    = \frac{1}{2 \pi} \, \frac{1}{\sqrt{v_1} \left( 1 - v_1 \right)^{3/2}}\, \exp\left\{ - \frac{v_2}{2(1 - v_1)} \right\}
\end{split}
\]
where $h(y) = e^{-y/2}/\sqrt{2 \pi y}$ is the density of a $\chi^2_{(1)}$ random variable and $\boldsymbol v \in [0,1]\times \R_+$.

Now, to obtain the distribution of $V_1$ we marginalize this joint density with respect to $v_2$:
\[
f_{V_1}\left( v_1 \right) = \int_0^\infty \frac{1}{2 \pi} \, \frac{1}{\sqrt{v_1} \left( 1 - v_1 \right)^{3/2}}\, \exp\left\{ - \frac{v_2}{2(1 - v_1)} \right\} \d v_2
= \frac{1}{\pi \sqrt{v_1 (1- v_1)}}.
\]
Finally, $U_1 = S \sqrt{V_1}$, where the sign $S$ takes the values $\pm 1$ with probability $1/2$ each and is independent of $V_1$; hence, by symmetry, for $u_1 \in (0,1)$ the density of $U_1$ is
\[
f_{U_1}(u_1) = \frac{1}{2}\, f_{V_1}\left( u_1^2 \right) 2 u_1
= \frac{u_1}{\pi \sqrt{u_1^2 \left( 1 - u_1^2 \right)}}
= \frac{1}{\pi \sqrt{ 1 - u_1^2}}.
\]
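The characterization can be probed numerically (NumPy assumed; sizes arbitrary): normalize a pair of independent standard Gaussians and check that the first coordinate follows the same arcsine law as $\cos(\Theta)$.

```python
import numpy as np

rng = np.random.default_rng(5)
g = rng.standard_normal(size=(300_000, 2))
u = g / np.linalg.norm(g, axis=1, keepdims=True)   # X / ||X||, points on the unit circle

x = np.linspace(-0.9, 0.9, 19)
emp = (u[:, 0][:, None] <= x).mean(axis=0)
exact = 1.0 - np.arccos(x) / np.pi                 # CDF of cos(Theta)
max_err = np.abs(emp - exact).max()
```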

This characterization is the key to extending the two-dimensional definition to the general case:

Definition 3.3 We say that a random vector \(\boldsymbol U\in \mathbb R^d\) has uniform distribution on the boundary of \(B_{d}^{(2)}\), or on \(S_d\), that is the sphere of \(\mathbb R^d\) in the \(\ell^2_d\) norm, if \[ \boldsymbol U \stackrel{d}{=} \left( \frac{Z_1}{\sqrt{\sum_{i=1}^{d} Z_i^2}}, \frac{Z_2}{\sqrt{\sum_{i=1}^{d} Z_i^2}},\dots, \frac{Z_d}{\sqrt{\sum_{i=1}^{d} Z_i^2}}\right) \] where \(Z_i \stackrel{i.i.d.}{\sim} Gauss(0,1)\).

Exploiting this definition it is (quite) easy to calculate the marginal distributions of \(\boldsymbol U\). For example, generalizing the result of Theorem \ref{teo:gauss_unif}, to calculate the one–dimensional marginal distribution we observe that \(U_1^2 = Z_1^2/(Z_1^2+\sum_{i=2}^{d} Z_i^2)\); this is just the ratio between a \(\chi^2_{(1)}\) and the sum of it and a \(\chi^2_{(d-1)}\) independent of \(Z_1^2\), hence it has a one–dimensional Dirichlet distribution, that is a \(Beta\left( 1/2, (d-1)/2 \right)\).

Furthermore, from this remark and the symmetry of the distribution of \(U_1\), we can conclude that \[\begin{equation} f_{U_1}(u) = \frac{1}{B(1/2,(d-1)/2)}\, \left( 1- u^2 \right)^{\frac{d-3}{2}}\, \mathbb{1}_{(-1,1)}(u). \label{eq:uniform_marginal} \end{equation}\] This gives immediately \(\mathbb E\,\left[ U_1 \right] = 0\) and \(Var\left( U_1 \right)= \mathbb E\,\left[ U_1^2 \right] =1/d\) and, more generally, all the moments of \(U_1\): obviously \(\mathbb E\,\left[ U_1^{2k +1} \right]=0\) for all \(k\ge 0\), while \[\begin{equation} \mathbb E\,\left[ U_1^{2k} \right] = \mathbb E\,\left[ \left( U_1^2 \right)^{k} \right] = \frac{B\left( \frac{1}{2} + k , \frac{d-1}{2} \right)}{B\left( \frac{1}{2}, \frac{d-1}{2} \right)} = \frac{\left( \frac{1}{2} \right)_{\left( k \right)}}{\left( \frac{d}{2} \right)_{\left( k \right)}} \label{eq:uniform_marg_moments} \end{equation}\] where \(\left( a \right)_{\left( k \right)} := a(a+1)\cdots(a+k-1)\) is the rising factorial.
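A quick Monte Carlo check of the moments (NumPy assumed; the dimension $d=5$ is an arbitrary choice): from the $Beta\left( 1/2, (d-1)/2 \right)$ law of $U_1^2$ one gets $\E\left[ U_1^2 \right] = 1/d$ and $\E\left[ U_1^4 \right] = \frac{(1/2)(3/2)}{(d/2)(d/2+1)} = 3/(d(d+2))$.

```python
import numpy as np

rng = np.random.default_rng(6)
d = 5
g = rng.standard_normal(size=(400_000, d))
u1 = g[:, 0] / np.linalg.norm(g, axis=1)   # first coordinate of a uniform point on S_d

m2 = (u1 ** 2).mean()   # should be close to 1/d = 0.2
m4 = (u1 ** 4).mean()   # should be close to 3/(d*(d+2)) = 3/35
```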

Finally, we compute the characteristic function of the univariate marginal distributions.
\begin{prop}
    If $\boldsymbol U\sim \U\left( S_d \right)$, then 
    \[
    \varphi_{U_1}(z) = \Gamma\left( \frac{d}{2} \right) \left( \frac{2}{\left| z \right|} \right)^{\frac{d}{2}-1} J_{\frac{d}{2}-1}\left( \left| z \right| \right).
    \]
\end{prop}
Proof. We have
    \[
    \begin{split}
    \varphi_{U_1}(z) &= \E\left[ \exp\left\{ i z U_1 \right\} \right]
    = \frac{1}{B\left( \frac{1}{2}, \frac{d-1}{2} \right)}\, \int_{-1}^1 e^{i z u} (1-u^2)^{\frac{d-3}{2}} \d u\\
    &= \frac{1}{B\left( \frac{1}{2}, \frac{d-1}{2} \right)}\, \int_{-1}^1 \cos\left( z u \right) (1-u^2)^{\left( \frac{d}{2}-1 \right) -\frac{1}{2}} \d u
    = \Gamma\left( \frac{d}{2} \right) \left( \frac{2}{z} \right)^{\frac{d}{2}-1} J_{\frac{d}{2}-1}(z)
    \end{split}
    \]
    where the second equality uses the symmetry of the density (the sine part integrates to zero) and the last one follows from 8.411.8 of \cite{gradshteyn:2007}.

The following theorem generalizes the previous result to a generic subvector of $\boldsymbol U$:
\begin{theorem}\label{thm:uniform_sphere_marginal}
    If $\boldsymbol U \sim \U\left( S_d \right)$ and $i_1, \dots, i_k \in \left\{ 1, 2, \dots, d \right\}$, with $k < d$, are distinct indices, then
    \[
    \left( U_{i_1}^2, U_{i_2}^2, \dots, U_{i_k}^2 \right) \sim Dir_k\left( 1/2, \dots, 1/2, (d - k) /2 \right)
    \]
    and the joint density of $\left( U_{i_1}, U_{i_2}, \dots, U_{i_k} \right)$ is
    \[
    f_{\boldsymbol U_k}\left( u_{i_1},\dots, u_{i_k} \right) = \frac{\Gamma\left( d/2 \right)}{\pi^{k/2} \Gamma\left( (d-k)/2 \right)} \, \left( 1 - \sum_{j = 1}^{k} u_{i_j}^2 \right)^{(d - k - 2)/2}
    \]
    on $\left\{ \left( u_{i_1},\dots, u_{i_k} \right): \sum_{j = 1}^{k} u_{i_j}^2 \le 1\right\}$.
\end{theorem}

Proof. The proof is analogous to the one-dimensional case: each \(U_{i_j}^2\) is the ratio between an independent \(\chi^2_{(1)}\) variable and the sum of \(d\) of them, whence the Dirichlet distribution. To find the density it is sufficient to apply the square-root transformation to the squared vector and choose each sign at random.
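The Dirichlet claim admits a simple numerical check on a pair of coordinates (NumPy assumed; $d = 4$ chosen arbitrarily): for $Dir_2\left( 1/2, 1/2, (d-2)/2 \right)$ the mixed moment is $\E\left[ V_1 V_2 \right] = \frac{(1/2)(1/2)}{(d/2)(d/2+1)} = 1/(d(d+2))$.

```python
import numpy as np

rng = np.random.default_rng(7)
d = 4
g = rng.standard_normal(size=(400_000, d))
u = g / np.linalg.norm(g, axis=1, keepdims=True)   # uniform points on S_4

mixed = (u[:, 0] ** 2 * u[:, 1] ** 2).mean()       # E[U_1^2 U_2^2] = 1/(d*(d+2)) = 1/24
```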

A simple but fundamental result is the invariance of the distribution with respect to orthogonal transformations of the vector: let \(\mathscr O(d)\) be the orthogonal group on \(\mathbb R^d\).

Theorem 3.1 If \(\boldsymbol U \sim \mathscr U_d\left( \partial B_{0}(1) \right)\), then \(\Gamma \boldsymbol U \stackrel{d}{=} \boldsymbol U\) for all \(\Gamma \in \mathscr O(d)\).

The proof is immediate: indeed this property is inherited from the analogous one for iid centered Gaussian random variables; however, it gives some hints for the generalization of the next section.
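Theorem 3.1 can be illustrated by simulation (NumPy assumed; sizes arbitrary): an orthogonal matrix is obtained from the QR factorization of a Gaussian matrix, and the rotated sample should have the same marginal law as the original one.

```python
import numpy as np

rng = np.random.default_rng(8)
d = 3
g = rng.standard_normal(size=(200_000, d))
u = g / np.linalg.norm(g, axis=1, keepdims=True)   # uniform sample on the sphere

q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # an orthogonal matrix Gamma
v = u @ q.T                                        # rotated sample Gamma U

x = np.linspace(-0.9, 0.9, 19)
emp_u = (u[:, 0][:, None] <= x).mean(axis=0)
emp_v = (v[:, 0][:, None] <= x).mean(axis=0)
max_gap = np.abs(emp_u - emp_v).max()              # small: Gamma U has the law of U
```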